Human speech can be characterized by different components, including semantic content, speaker identity and prosodic information. Significant progress has been made in disentangling representations for semantic content and speaker identity in Automatic Speech Recognition (ASR) and speaker verification tasks, respectively. However, extracting prosodic information remains an open and challenging research question, because of the intrinsic association of different attributes, such as timbre and rhythm, and because of the need for unsupervised training schemes to achieve robust, large-scale and speaker-independent ASR. The aim of this paper is to address the disentanglement of emotional prosody from speech based on unsupervised reconstruction. Specifically, we identify, design, implement and integrate three crucial components in our proposed speech reconstruction model, Prosody2Vec: (1) a unit encoder that transforms speech signals into discrete units carrying semantic content, (2) a pretrained speaker verification model that generates speaker identity embeddings, and (3) a trainable prosody encoder that learns prosody representations. We first pretrain the Prosody2Vec representations on unlabelled emotional speech corpora, then fine-tune the model on specific datasets to perform Speech Emotion Recognition (SER) and Emotional Voice Conversion (EVC) tasks. Both objective and subjective evaluations on the EVC task suggest that Prosody2Vec effectively captures general prosodic features that can be smoothly transferred to other emotional speech. In addition, our SER experiments on the IEMOCAP dataset reveal that the prosody features learned by Prosody2Vec are complementary to and beneficial for widely used speech pretraining models, and that combining Prosody2Vec with HuBERT representations surpasses state-of-the-art methods. Some audio samples can be found on our demo website.
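To make the three-component architecture concrete, below is a minimal PyTorch sketch of the reconstruction model the abstract describes. All layer choices and dimensions are illustrative assumptions, not the authors' implementation; the speaker embedding, component (2), is assumed to come from an external, frozen speaker verification model.

```python
import torch
import torch.nn as nn

class Prosody2Vec(nn.Module):
    def __init__(self, n_units=100, d=256, n_mels=80):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d)                 # (1) unit encoder
        self.prosody_enc = nn.GRU(n_mels, d, batch_first=True)   # (3) prosody encoder
        self.decoder = nn.GRU(3 * d, d, batch_first=True)        # reconstruction decoder
        self.out = nn.Linear(d, n_mels)

    def forward(self, units, mel, spk_emb):
        # units: (B, T) discrete unit ids; mel: (B, T, n_mels);
        # spk_emb: (B, d) from a frozen speaker verification model, component (2)
        u = self.unit_emb(units)
        p, _ = self.prosody_enc(mel)
        s = spk_emb.unsqueeze(1).expand(-1, units.size(1), -1)
        h, _ = self.decoder(torch.cat([u, p, s], dim=-1))
        return self.out(h)                                       # reconstructed mel

# Unsupervised pretraining signal: reconstruct the input mel-spectrogram.
model = Prosody2Vec()
units, mel, spk = torch.randint(0, 100, (2, 50)), torch.randn(2, 50, 80), torch.randn(2, 256)
loss = nn.functional.l1_loss(model(units, mel, spk), mel)
```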
The task of emotion recognition in conversations (ERC) benefits from the availability of multiple modalities, as offered, for example, in the video-based MELD dataset. However, only a few research approaches use both acoustic and visual information from the MELD videos. There are two reasons for this: First, label-to-video alignments in MELD are noisy, making those videos an unreliable source of emotional speech data. Second, conversations can involve several people in the same scene, which requires the detection of the person speaking the utterance. In this paper we demonstrate that by using recent automatic speech recognition and active speaker detection models, we are able to realign the videos of MELD, and capture the facial expressions from uttering speakers in 96.92% of the utterances provided in MELD. Experiments with a self-supervised voice recognition model indicate that the realigned MELD videos more closely match the corresponding utterances offered in the dataset. Finally, we devise a model for emotion recognition in conversations trained on the face and audio information of the MELD realigned videos, which outperforms state-of-the-art models for ERC based on vision alone. This indicates that active speaker detection is indeed effective for extracting facial expressions from the uttering speakers, and that faces provide more informative visual cues than the visual features state-of-the-art models have been using so far.
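As a rough illustration of the realignment step, the sketch below matches a MELD transcript against word-level ASR timestamps to recover corrected utterance boundaries. The ASR and active speaker detection models are assumed to be external components, and `run_whisper` in the usage comment is a hypothetical placeholder, not an actual API.

```python
import difflib

def realign(asr_words, transcript):
    """asr_words: [(word, start_sec, end_sec), ...] from ASR on the source
    video; transcript: the utterance text provided by MELD."""
    tokens = [w.lower() for w, _, _ in asr_words]
    target = transcript.lower().split()
    best, best_ratio = None, 0.0
    for i in range(len(tokens)):                      # scan candidate spans
        for j in range(i + 1, min(i + 2 * len(target), len(tokens)) + 1):
            r = difflib.SequenceMatcher(None, tokens[i:j], target).ratio()
            if r > best_ratio:
                best, best_ratio = (i, j), r
    i, j = best
    return asr_words[i][1], asr_words[j - 1][2]       # corrected (start, end)

# e.g. realign(run_whisper(video), "I'm so sorry!") -> (12.4, 13.1); active
# speaker detection is then run on the realigned clip to pick the face track.
```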
Sound is one of the most informative and abundant modalities in the real world, and it can be sensed without contact by small and inexpensive sensors that can be placed on mobile devices. Although deep learning is capable of extracting information from multiple sensory inputs, sound has rarely been used for the control and learning of robotic actions. In unsupervised reinforcement learning, an agent is expected to actively collect experience and to jointly learn representations and policies in a self-supervised way. We build realistic robot manipulation scenarios with physics-based sound simulation and propose the Intrinsic Sound Curiosity Module (ISCM). The ISCM provides feedback to the reinforcement learner to learn robust representations and to reward more efficient exploration behaviour. We perform experiments with sound enabled during adaptation and show that the representations learned by the ISCM outperform those of vision-only baselines, and that pretrained policies can accelerate the learning process when applied to downstream tasks.
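The sketch below illustrates one plausible reading of such a sound-based curiosity module: a forward model predicts the encoding of the next sound observation, and its prediction error serves as an intrinsic exploration reward. The layer sizes and the single forward model are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SoundCuriosity(nn.Module):
    def __init__(self, d_state=64, d_action=8, d_audio=128):
        super().__init__()
        self.audio_enc = nn.Linear(d_audio, 64)       # encodes simulated sound
        self.forward_model = nn.Sequential(
            nn.Linear(d_state + d_action, 128), nn.ReLU(), nn.Linear(128, 64))

    def intrinsic_reward(self, state, action, next_audio):
        target = self.audio_enc(next_audio).detach()
        pred = self.forward_model(torch.cat([state, action], dim=-1))
        # high prediction error on the heard sound -> novel -> explore more
        return (pred - target).pow(2).mean(dim=-1)

# The intrinsic reward supplements the task reward during exploration.
iscm = SoundCuriosity()
r_int = iscm.intrinsic_reward(torch.randn(4, 64), torch.randn(4, 8), torch.randn(4, 128))
```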
Flexibly handling a variety of robot action-language translation tasks is an essential requirement for natural interaction between robots and humans. Previous approaches require changing the configuration of the model architecture per task during inference, which undermines the premise of multi-task learning. In this work, we propose the Paired Gated Autoencoders (PGAE) for flexible translation between robot actions and language descriptions in a tabletop object manipulation scenario. We train our model in an end-to-end fashion by pairing each action with an appropriate description that contains a signal informing the translation direction. During inference, our model can flexibly translate from action to language and vice versa, according to the given language signal. Moreover, by opting to use a pretrained language model as the language encoder, our model has the potential to recognize unseen natural language input. A further capability of our model is that it can recognize and imitate the actions of another agent by using robot demonstrations. The experimental results highlight the flexible bidirectional translation capability of our approach, while it also generalizes to the actions of the opposite-sitting agent.
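A toy sketch of the signal-driven bidirectional interface follows. The real PGAE uses gated multimodal fusion inside its bottleneck, which is reduced here to a simple direction switch; all module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PGAE(nn.Module):
    """Illustrative paired autoencoder: action and language streams share a
    latent, and the signal selects the translation direction."""
    def __init__(self, d_lang=64, d_act=32, d=128):
        super().__init__()
        self.enc_lang = nn.GRU(d_lang, d, batch_first=True)
        self.enc_act = nn.GRU(d_act, d, batch_first=True)
        self.dec_lang = nn.GRU(d, d_lang, batch_first=True)
        self.dec_act = nn.GRU(d, d_act, batch_first=True)

    def forward(self, lang, act, signal):
        # signal: "describe" (action -> language) or "execute" (lang -> action);
        # the target-side input here only fixes the output length of this toy.
        _, h = self.enc_act(act) if signal == "describe" else self.enc_lang(lang)
        T = lang.size(1) if signal == "describe" else act.size(1)
        z = h.transpose(0, 1).expand(-1, T, -1)       # broadcast shared latent
        out, _ = self.dec_lang(z) if signal == "describe" else self.dec_act(z)
        return out

pgae = PGAE()
desc = pgae(torch.zeros(1, 6, 64), torch.randn(1, 20, 32), signal="describe")
```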
Spatial reasoning poses a particular challenge for intelligent agents and is at the same time a prerequisite for their successful interaction and communication in the physical world. One such reasoning task is to describe the position of a target object with respect to the intrinsic orientation of some reference object via relative directions. In this paper, we introduce a novel diagnostic visual question-answering (VQA) dataset based on abstract objects. Our dataset allows for a fine-grained analysis of an end-to-end VQA model's capability to ground relative directions. At the same time, model training requires considerably fewer computational resources compared with existing datasets, yet yields comparable or even better performance. Along with the new dataset, we provide a thorough evaluation based on two end-to-end VQA architectures trained on Grid-A-3D. We demonstrate that, within a few epochs, the subtasks required to reason about relative directions, such as recognizing and locating objects in a scene and estimating their intrinsic orientations, are handled in the order in which relative directions are intuitively processed.
The aim of this work is to investigate the impact of cross-modal self-supervised pretraining on speech reconstruction (video-to-audio) by leveraging the natural co-occurrence of audio and visual streams in videos. We propose LipSound2, which consists of an encoder-decoder architecture with a location-aware attention mechanism that directly maps face image sequences to mel-spectrograms, without requiring any human annotation. The proposed LipSound2 model is first pretrained on about 2400h of multilingual (e.g. English and German) audio-visual data (VoxCeleb2). To verify the generalizability of the proposed method, we then fine-tune the pretrained model on domain-specific datasets (GRID, TCD-TIMIT) and achieve significant improvements in speech quality and intelligibility over previous approaches in both speaker-dependent and speaker-independent settings. In addition to English, we conduct Chinese speech reconstruction on the CMLR dataset to verify the impact on transferability. Finally, we train a cascaded lip reading (video-to-text) system by fine-tuning the generated audios on a pretrained speech recognition system, and achieve state-of-the-art performance on both English and Chinese benchmark datasets.
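The following is a minimal sketch of the video-to-mel-spectrogram mapping described above. Standard dot-product attention stands in for the paper's location-aware attention, one mel frame is emitted per video frame (the real model must handle the video/audio frame-rate mismatch), and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class Lip2Mel(nn.Module):
    def __init__(self, d=256, n_mels=80):
        super().__init__()
        self.video_enc = nn.Sequential(                   # per-frame face encoder
            nn.Conv3d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)))
        self.proj = nn.Linear(32, d)
        self.attn = nn.MultiheadAttention(d, 4, batch_first=True)
        self.dec = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, n_mels)

    def forward(self, frames):
        # frames: (B, 3, T, H, W) face image sequence
        f = self.video_enc(frames).flatten(2).transpose(1, 2)  # (B, T, 32)
        f = self.proj(f)
        ctx, _ = self.attn(f, f, f)           # attention over video features
        h, _ = self.dec(ctx)
        return self.out(h)                    # (B, T, n_mels) mel-spectrogram

mel = Lip2Mel()(torch.randn(1, 3, 75, 48, 48))   # 75 face frames -> (1, 75, 80)
```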
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
The SINDy algorithm has been successfully used to identify the governing equations of dynamical systems from time series data. In this paper, we argue that this makes SINDy a potentially useful tool for causal discovery, and that existing tools for causal discovery can be used to dramatically improve the performance of SINDy as a tool for robust sparse modeling and system identification. We then demonstrate empirically that augmenting the SINDy algorithm with tools from causal discovery provides engineers with a tool for learning causally robust governing equations.
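For reference, the core SINDy step is sequentially thresholded least squares: regress the estimated time derivatives onto a library of candidate terms and iteratively prune small coefficients. The sketch below shows that standard step only, not the paper's causal augmentation; the threshold and iteration count are illustrative defaults.

```python
import numpy as np

def stlsq(theta, x_dot, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: x_dot ≈ theta @ xi, xi sparse.
    theta: (n_samples, n_terms) candidate-function library evaluated on the
    time series; x_dot: (n_samples, n_states) estimated derivatives."""
    xi = np.linalg.lstsq(theta, x_dot, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold        # prune near-zero coefficients
        xi[small] = 0.0
        for k in range(x_dot.shape[1]):       # refit each state on survivors
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(
                    theta[:, big], x_dot[:, k], rcond=None)[0]
    return xi
```

The nonzero rows of `xi` name the library terms that enter each governing equation, which is what makes the output amenable to causal inspection.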
The application of Natural Language Processing (NLP) to specialized domains, such as the law, has recently received a surge of interest. As many legal services rely on processing and analyzing large collections of documents, automating such tasks with NLP tools emerges as a key challenge. Many popular language models, such as BERT or RoBERTa, are general-purpose models, which limits their ability to process specialized legal terminology and syntax. In addition, legal documents may contain specialized vocabulary from other domains, such as medical terminology in personal injury text. Here, we propose LegalRelectra, a legal-domain language model trained on mixed-domain legal and medical corpora. We show that our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text. Our training architecture implements the Electra framework, but utilizes Reformer instead of BERT for its generator and discriminator. We show that this improves the model's performance on long passages and results in better long-range text comprehension.
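A toy sketch of the Electra-style replaced-token-detection objective mentioned above follows, with a small GRU encoder standing in for the Reformer backbones of LegalRelectra; the mask id, vocabulary size, greedy sampling, and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in backbone; LegalRelectra would use a Reformer here."""
    def __init__(self, vocab=1000, d=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.rnn = nn.GRU(d, d, batch_first=True)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return h

gen, gen_head = TinyEncoder(), nn.Linear(64, 1000)   # generator fills masks
disc, disc_head = TinyEncoder(), nn.Linear(64, 1)    # discriminator: replaced?

ids = torch.randint(1, 1000, (2, 16))
mask = torch.rand(2, 16) < 0.15                      # mask 15% of tokens
masked = ids.masked_fill(mask, 0)                    # 0 = assumed [MASK] id

logits = gen_head(gen(masked))
sampled = logits.argmax(-1)                          # greedy sample for brevity
corrupted = torch.where(mask, sampled, ids)
replaced = (corrupted != ids).float()                # per-token RTD labels

mlm_loss = F.cross_entropy(logits[mask], ids[mask])
rtd_loss = F.binary_cross_entropy_with_logits(
    disc_head(disc(corrupted)).squeeze(-1), replaced)
loss = mlm_loss + 50.0 * rtd_loss                    # Electra-style weighting
```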
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that can aid medical professionals by diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty in acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results using two lung ultrasound datasets and demonstrate that our model is capable of achieving performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
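As an illustration of the sparse-coding representation step in this pipeline, the sketch below encodes a flattened video volume against a fixed dictionary via ISTA. Dictionary learning, the YOLOv4 cropping, and the final small-data classifier are separate components not shown, and all sizes are made-up examples.

```python
import numpy as np

def ista(D, x, lam=0.1, iters=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 (sparse code for x)."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of D^T D
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - (D.T @ (D @ a - x)) / L       # gradient step on the fit term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return a

# e.g. encode a cropped 8x16x16 ultrasound clip against a 256-atom dictionary
D = np.random.randn(8 * 16 * 16, 256)
code = ista(D, np.random.randn(8 * 16 * 16))  # sparse feature for the classifier
```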